Network models are an essential building block of modern networks. For example, they are widely used in network planning and optimization. However, as networks grow in scale and complexity, some models present limitations, such as the assumption of Markovian traffic in queuing theory models or the high computational cost of network simulators. Recent advances in machine learning, such as Graph Neural Networks (GNN), are enabling a new generation of network models that are data-driven and can learn complex non-linear behaviors. In this paper, we present RouteNet-Fermi, a custom GNN model that shares the same goals as queuing theory while being considerably more accurate in the presence of realistic traffic models. The proposed model accurately predicts the delay, jitter, and packet loss of networks. We have tested RouteNet-Fermi in networks of increasing size (up to 300 nodes), including samples with mixed traffic profiles -- e.g., with complex non-Markovian models -- and arbitrary routing and queue scheduling configurations. Our experimental results show that RouteNet-Fermi achieves accuracy comparable to that of computationally expensive packet-level simulators and is able to scale accurately to large networks. For example, the model produces delay estimates with a mean relative error of 6.24% when applied to a test dataset with 1,000 samples, including network topologies one order of magnitude larger than those seen during training.
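To illustrate the kind of message passing such a graph-based model performs, below is a minimal toy sketch in PyTorch that alternates updates between per-path and per-link states and reads out a per-path delay. The layer sizes, update scheme, and two-path toy topology are illustrative assumptions, not the authors' RouteNet-Fermi implementation.

```python
# Toy path/link message-passing sketch, loosely inspired by RouteNet-style models.
# All widths, iteration counts, and the topology below are assumptions for illustration.
import torch
import torch.nn as nn

class ToyRouteNet(nn.Module):
    def __init__(self, hidden=32, iterations=4):
        super().__init__()
        self.iterations = iterations
        self.link_update = nn.GRUCell(hidden, hidden)   # link state <- messages from paths
        self.path_update = nn.GRUCell(hidden, hidden)   # path state <- messages from links
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))  # per-path delay estimate

    def forward(self, link_state, path_state, path_links):
        # path_links: one tensor of link indices per path (the routing configuration)
        for _ in range(self.iterations):
            # each path reads the states of the links it traverses
            path_msgs = torch.stack([link_state[idx].mean(dim=0) for idx in path_links])
            path_state = self.path_update(path_msgs, path_state)
            # each link aggregates the states of the paths crossing it
            link_msgs = torch.zeros_like(link_state)
            counts = torch.zeros(link_state.size(0), 1)
            for p, idx in enumerate(path_links):
                link_msgs[idx] += path_state[p]
                counts[idx] += 1
            link_state = self.link_update(link_msgs / counts.clamp(min=1), link_state)
        return self.readout(path_state).squeeze(-1)     # one delay value per path

# Toy example: 3 links, 2 paths (path 0 uses links 0-1, path 1 uses links 1-2).
model = ToyRouteNet()
delays = model(torch.randn(3, 32), torch.randn(2, 32),
               [torch.tensor([0, 1]), torch.tensor([1, 2])])
print(delays.shape)  # torch.Size([2])
```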
Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebImageText, and LAION have propelled recent dramatic progress in AI. Large neural models trained on such datasets produce impressive results and top many of today's benchmarks. A notable omission within this family of large-scale datasets is 3D data. Despite considerable interest and potential applications in 3D vision, datasets of high-fidelity 3D models continue to be mid-sized with limited diversity of object categories. Addressing this gap, we present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present-day 3D repositories in terms of scale, number of categories, and the visual diversity of instances within a category. We demonstrate the large potential of Objaverse via four diverse applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models. Objaverse can open new directions for research and enable new applications across the field of AI.
Text-guided image editing can have a transformative impact in supporting creative applications. A key challenge is to generate edits that are faithful to input text prompts while consistent with input images. We present Imagen Editor, a cascaded diffusion model built by fine-tuning Imagen on text-guided image inpainting. Imagen Editor's edits are faithful to the text prompts, which is accomplished by using object detectors to propose inpainting masks during training. In addition, Imagen Editor captures fine details in the input image by conditioning the cascaded pipeline on the original high-resolution image. To improve qualitative and quantitative evaluation, we introduce EditBench, a systematic benchmark for text-guided image inpainting. EditBench evaluates inpainting edits on natural and generated images, exploring objects, attributes, and scenes. Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
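As an illustration of the object-masking idea, the sketch below builds a binary inpainting mask from detector boxes during training instead of using random masks. The function name, interfaces, and fallback behavior are hypothetical placeholders for illustration, not Imagen Editor's actual code.

```python
# Build an inpainting mask (1 = region to edit) from object-detector boxes.
import numpy as np

def boxes_to_inpainting_mask(image_hw, boxes, max_boxes=1, rng=None):
    """image_hw: (height, width) of the training image.
    boxes: array of [x0, y0, x1, y1] detections in pixel coordinates."""
    rng = rng or np.random.default_rng()
    h, w = image_hw
    mask = np.zeros((h, w), dtype=np.float32)
    if len(boxes) == 0:                      # fall back to a random box if nothing is detected
        x0, y0 = rng.integers(0, w // 2), rng.integers(0, h // 2)
        boxes = [[x0, y0, x0 + w // 4, y0 + h // 4]]
    chosen = rng.choice(len(boxes), size=min(max_boxes, len(boxes)), replace=False)
    for i in chosen:
        x0, y0, x1, y1 = [int(v) for v in boxes[i]]
        mask[y0:y1, x0:x1] = 1.0             # mask out the detected object region
    return mask

# Usage: mask a detected object, then train the model to reconstruct the masked region
# conditioned on the text prompt and the unmasked pixels.
mask = boxes_to_inpainting_mask((256, 256), np.array([[40, 60, 120, 180]]))
```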
This paper describes the system developed at the Universitat Politècnica de Catalunya for the Workshop on Machine Translation 2022 Sign Language Translation Task, in particular for the sign-to-text direction. We use a Transformer model implemented with the Fairseq modeling toolkit. We have experimented with the vocabulary size, data augmentation techniques, and pretraining the model with the PHOENIX-14T dataset. Our system obtains a BLEU score of 0.50 on the test set, improving the organizers' baseline by 0.38 BLEU. We note the poor results for both the baseline and our system, and thus the unreliability of our findings.
Training effective embodied AI agents often involves manual reward engineering, expert imitation, specialized components such as maps, or leveraging additional sensors for depth and localization. Another approach is to use neural architectures alongside self-supervised objectives which encourage better representation learning. In practice, there are few guarantees that these self-supervised objectives encode task-relevant information. We propose the Scene Graph Contrastive (SGC) loss, which uses scene graphs as general-purpose, training-only, supervisory signals. The SGC loss does away with explicit graph decoding and instead uses contrastive learning to align an agent's representation with a rich graphical encoding of its environment. The SGC loss is generally applicable, simple to implement, and encourages representations that encode objects' semantics, relationships, and history. Using the SGC loss, we attain significant gains on three embodied tasks: Object Navigation, Multi-Object Navigation, and Arm Point Navigation. Finally, we present studies and analyses which demonstrate the ability of our trained representation to encode semantic cues about the environment.
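To make the contrastive alignment concrete, the following is a generic InfoNCE-style sketch that pulls each agent embedding toward the embedding of its own scene graph and pushes it away from the graphs of other samples in the batch. The encoders, dimensions, and exact loss form are illustrative assumptions rather than the paper's SGC implementation.

```python
# Generic contrastive alignment between agent and scene-graph embeddings.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(agent_repr, graph_repr, temperature=0.1):
    """agent_repr, graph_repr: (batch, dim) embeddings for matched agent/graph pairs."""
    a = F.normalize(agent_repr, dim=-1)
    g = F.normalize(graph_repr, dim=-1)
    logits = a @ g.t() / temperature          # similarity of every agent to every graph
    targets = torch.arange(a.size(0))         # the i-th graph is the positive for agent i
    # symmetric cross-entropy: pull matched pairs together, push mismatched pairs apart
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = contrastive_alignment_loss(torch.randn(8, 128), torch.randn(8, 128))
```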
Significant progress has recently been made on challenging tasks in automatic sign language understanding, such as sign language recognition, translation, and production. However, these works have focused on datasets with relatively few samples, short recordings, and a limited vocabulary and signing space. In this work, we introduce the novel task of sign language topic detection. We base our experiments on How2Sign, a large-scale video dataset spanning multiple semantic domains. We provide strong baselines for the topic detection task and present a comparison between different visual features commonly used in the sign language domain.
One of the most robust patterns found in human languages is Zipf's law of abbreviation, that is, the tendency of more frequent words to be shorter. Since Zipf's pioneering research, this law has been viewed as a manifestation of compression, i.e., the minimization of the length of forms, a universal principle of natural communication. Although the claim that languages are optimized has become fashionable, attempts to measure the degree of optimization of languages have been rather scarce. Here we demonstrate that compression manifests itself in a wide sample of languages without exception and independently of the unit of measurement. It is detectable both in word lengths measured in characters of written language and in the durations of spoken language. Moreover, to measure the degree of optimization, we derive a simple formula for a random baseline and present two scores that are dually normalized, namely, normalized with respect to both the minimum and the random baseline. We analyze the theoretical and statistical advantages and disadvantages of these and other scores. Using the best scores, we quantify for the first time the degree of optimality of word lengths in languages. This indicates that languages are optimized, on average, to 62% or 67% (depending on the source) when word lengths are measured in characters, and to 65% on average when word lengths are measured in time. In general, spoken word durations are more optimized than written word lengths in characters. Beyond the analyses reported here, our work paves the way for measuring the degree of optimality of the vocalizations or gestures of other species and comparing them with written, spoken, or signed human languages.
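For concreteness, one natural form a dually normalized optimality score can take is sketched below; the notation and exact definition are assumptions for illustration and may differ from the paper's formulation.

```latex
% Let L = \sum_i p_i\, l_i be the frequency-weighted mean word length,
% L_{\min} the minimum achievable under an optimal frequency--length pairing,
% and L_{\mathrm{rand}} the expectation under the random baseline. Then
\[
  \Omega \;=\; \frac{L_{\mathrm{rand}} - L}{L_{\mathrm{rand}} - L_{\min}},
\]
% so that \Omega = 0 for the random baseline and \Omega = 1 for a fully optimized
% language; the 62--67\% figures quoted above correspond to such a normalized scale.
```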
We implement the Video Swin Transformer as our base architecture for the tasks of Point-of-No-Return (PNR) temporal localization and object state change classification. Our method achieves competitive performance on both challenges.
The capacity sharing problem in Radio Access Network (RAN) slicing deals with the distribution of the available capacity among the different slices in order to satisfy their traffic demands and use the radio resources efficiently. While several algorithmic solutions for capacity sharing have been proposed in the literature, their practical implementation remains a gap. In this paper, the implementation of a reinforcement learning-based capacity sharing algorithm over the O-RAN architecture is discussed, providing insights into the operation of the involved interfaces and the containerization of the solution. Moreover, a description of the testbed used to validate the solution is included, and some performance and validation results are provided.
Automatic Speech Recognition (ASR) is a key element of new services that help users interact with automated systems. Deep learning methods have made it possible to deploy English ASR systems with word error rates below 5%. However, the use of these methods is only feasible for languages with hundreds or thousands of hours of audio and corresponding transcriptions. To enable so-called low-resource languages to speed up the availability of resources that can improve the performance of their ASR systems, methods for creating new resources from existing ones are being investigated. In this paper, we describe our data augmentation approach to improve the results of ASR models for low-resource and agglutinative languages. We conducted experiments developing an ASR system for Quechua using the wav2letter++ model. Our approach reduced the WER by 8.73% with respect to our base model. The resulting ASR model obtained a WER of 22.75% and was trained with 99 hours of original resources and 99 hours of synthetic data, combining text augmentation and synthetic speech generation.
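The sketch below shows, schematically, how such an augmentation recipe can be assembled: generate synthetic speech for new (augmented) sentences and merge it with the original corpus before training. The synthesize() helper and the tab-separated manifest format are hypothetical placeholders, not the authors' actual tooling.

```python
# Merge original (audio, transcript) pairs with synthetic speech for augmented text.
import csv

def build_training_manifest(original_pairs, augmented_sentences, synthesize, out_path):
    """original_pairs: iterable of (audio_path, transcript) from the real corpus.
    augmented_sentences: new transcripts produced by text augmentation.
    synthesize: callable mapping a sentence to the path of its synthesized audio."""
    rows = list(original_pairs)
    for sentence in augmented_sentences:
        rows.append((synthesize(sentence), sentence))   # synthetic speech + its transcript
    with open(out_path, "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(rows)   # one (audio, transcript) pair per row
    return len(rows)
```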